Search Results: "storm"

14 September 2021

Sven Hoexter: PV - Monitoring Envertech Microinverter via envertecportal.com

Some time ago I looked briefly at an Envertech data logger for small scale photovoltaic setups. Turned out that PV inverters are kinda unreliable, and you really have to monitor them to notice downtimes and defects. Since my pal shot for a quick win I've cobbled together another Python script to query the portal at www.envertecportal.com, and report back if the generated power is down to 0. The script is currently run on a vserver via cron and reports back via the system MTA. So yeah, you need to have something like that already at hand.

Script and Configuration
You have to provide your PV system's location with latitude and longitude so the script can calculate (via python3-suntime) the sunrise and sunset times. At the location we deal with we expect to generate some power at least from sunrise + 1h to sunset - 1h. That is tunable via the configuration option toleranceSeconds. Retrieving the stationId is a bit ugly because it's not provided via any API, instead it's rendered server-side into the website. So I just logged in on the portal and picked it up by looking into the page source.

www.envertecportal.com API
I guess this is some classic in the IoT land, but neither the documentation provided on the portal frontpage as docx, nor the API docs at port 8090, are complete and correct. The few bits I gathered via the Firefox Web Developer Tools are (a minimal Python sketch of the full flow follows the list):
  1. Login https://www.envertecportal.com/apiaccount/login - POST, send userName and pwd containing your login name and password. The response JSON is very explicit if your login was not successful, and why.
  2. Store the session cookie called ASP.NET_SessionId for use on all subsequent requests.
  3. Retrieve station info https://www.envertecportal.com/ApiStations/getStationInfo - POST, send the ASP.NET_SessionId cookie and stationId with the ID of the station. Returns a JSON document with an object named Data. The field Power contains the currently generated power as a float with two digits (e.g. 0.01).
  4. Logout https://www.envertecportal.com/apiAccount/Logout - POST, send the ASP.NET_SessionId cookie.
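Putting those pieces together, a minimal sketch of the whole check could look like the following. The endpoints and the userName, pwd, stationId and Data.Power names are the ones noted above; the suntime usage and every other name (helpers, coordinates, credentials) is illustrative rather than the actual script.

from datetime import datetime, timedelta, timezone

import requests
from suntime import Sun

BASE = "https://www.envertecportal.com"

def daylight(lat, lon, tolerance_seconds=3600):
    # Only expect power between sunrise + tolerance and sunset - tolerance.
    sun = Sun(lat, lon)
    margin = timedelta(seconds=tolerance_seconds)
    now = datetime.now(timezone.utc)
    return sun.get_sunrise_time() + margin <= now <= sun.get_sunset_time() - margin

def current_power(user, password, station_id):
    session = requests.Session()  # keeps the ASP.NET_SessionId cookie for us
    session.post(BASE + "/apiaccount/login",
                 data={"userName": user, "pwd": password}).raise_for_status()
    try:
        resp = session.post(BASE + "/ApiStations/getStationInfo",
                            data={"stationId": station_id})
        resp.raise_for_status()
        return float(resp.json()["Data"]["Power"])
    finally:
        session.post(BASE + "/apiAccount/Logout")

if __name__ == "__main__":
    if daylight(50.11, 8.68) and current_power("user@example.com", "secret", "STATIONID") == 0:
        print("PV system reports 0 W during daylight, check the inverters!")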
Some Surprises
There were a few surprises; maybe they help others dealing with an Envertech setup.
  1. The portal truncates passwords at 16 chars.
  2. The "Forget Password?" function mails you back the password in plain text (that's how I learned about 1.).
  3. The login API endpoint reporting the exact reason why the login failed is somewhat out of fashion. Though this one is probably not a credential stuffing target because there is no money to be made, so I don't care.
  4. The data logger reports the data to www.envertecportal.com at port 10013.
  5. There is some checksumming done on the reported data, but the system is not replay safe. So you can send it any valid data string at a later time and get wrong data recorded.
  6. People at forum.fhem.de decoded some values but could not figure out the checksumming so far.

18 August 2021

Norbert Preining: Debian KDE/Plasma Status 2021-08-18

Bullseye has been released, and we are in the post-release rush with lots of changes going on. On the KDE/Plasma side we are trying to move all our accumulated changes to unstable. On the OBS side, Frameworks 5.85 and KDE Gears 21.08.0 have been released.

Debian Bullseye
As mentioned previously, the now released Debian/Bullseye contains KDE Frameworks 5.78, including several backports of fixes from 5.79 to get smooth operation. Plasma 5.20.5, again with several cherry-picks for bug fixes, is in Bullseye, too. The KDE/Apps are mostly at the 20.12 level, and the KDE PIM group packages (akonadi, kmail, etc.) are at 20.08.

Debian unstable (and in time also testing)
Frameworks 5.83 and Plasma 5.21.5 have been uploaded to unstable. This is a temporary measure until several necessary packages have cleared the NEW queue. After that we will upload Frameworks 5.85 and Plasma 5.22.N. KDE Gears is still at 20.08, but we have 21.04 in experimental, and I am currently preparing to upload 21.08.0, which should be done soon.

OBS packages
The OBS packages as usual follow the latest release, and currently ship KDE Frameworks 5.85, KDE Gears 21.08.0, and Plasma 5.22.4. The package sources are as usual (note the different path for the Plasma packages and the apps packages, containing the release version!), for Debian/unstable:
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma522/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps2108/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other/Debian_Unstable/ ./
and the same with Testing instead of Unstable for Debian/testing.

OBS for bullseye
For now, use the Testing packages with the additional dependencies:
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other-deps/Debian_Testing/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/Debian_Testing/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma522/Debian_Testing/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps2108/Debian_Testing/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other/Debian_Testing/ ./
Sooner or later OBS will offer Debian Bullseye as a proper target, and then I will prepare releases for it and write another blog post.

Closing
Expect continued breakage over the next weeks until the upload storm subsides a bit.

16 August 2021

Russ Allbery: Review: The Galaxy, and the Ground Within

Review: The Galaxy, and the Ground Within, by Becky Chambers
Series: Wayfarers #4
Publisher: Harper Voyager
Copyright: April 2021
ISBN: 0-06-293605-0
Format: Kindle
Pages: 325
The name of the planet Gora is the Hanto word for useless. It's a bone-dry, resource-poor planet with nothing to recommend it except that it happened to be conveniently located between five other busy systems and had well-established interspatial tunnels. Gora is therefore a transit hub: a place lots of people visit for about half a day while waiting for a departure time, but where very few people stay. It is the interstellar equivalent of an airport.

Ouloo is a Laru, a species physically somewhat similar to a giant dog. She is the owner of the Five-Hop One-Stop in a habitat dome on Gora, where she lives with her child (who is not yet old enough to choose a gender). On the day when this novel begins, she's expecting three ships to dock: one Aeluon, one Quelin, and, to Ouloo's significant surprise and moderate discomfort, an Akarak. But apart from that, it's a normal day. A normal day, that is, until maintenance work on the solar satellite array leads to a Kessler syndrome collision cascade that destroys most of the communication satellites and makes it unsafe to leave the surface of the planet. Ouloo and her guests are stuck with each other for longer than they expected.

In a typical SF novel, you would expect the characters to have to fix the satellite cascade, or for it to be a sign of something more nefarious. That is not the case here; the problem is handled by the Goran authorities, the characters have no special expertise, and there is no larger significance to the accident. Instead, the accident functions as the storm in a very old storytelling frame: three travelers and their host and her child, trapped together by circumstance and forced to entertain each other until they can go on their way. Breaking from the formula, they do not do that primarily by telling stories to each other, although the close third-person narration that moves between the characters reveals their backgrounds over the course of the book. Instead, a lot of this book is conversation, sometimes prompted by Ouloo's kid Tupo (who I thought was a wonderfully-written tween, complete with swings between curiosity and shyness, random interests, occasionally poor impulse control, and intense but unpredictable learning interest). That leads to some conflict (and some emergencies), but, similar to Record of a Spaceborn Few, this is more of a character study book than a things-happen book.

An interesting question, then, is why is this story science fiction? A similar story could be written (and has been, many times) with human travelers in a mundane inn or boarding house in a storm. None of the aliens are all that alien; despite having different body shapes and senses, you could get more variation from a group of university students. And even more than with Chambers's other books, the advanced technology is not the point and is described only enough to provide some background color and a bit of characterization. The answer, for me, is that the cognitive estrangement of non-human characters relieves my brain of the baggage that I bring to human characters and makes it easier for me to empathize with the characters as individuals rather than representatives of human archetypes. With human characters, I would be fitting them into my knowledge of history and politics, and my reaction to each decision the characters make would be influenced by the assumptions prompted by that background.
I enjoy the distraction of invented worlds and invented histories in part because they're simplified compared to human histories and therefore feel more knowable and less subtle. I'm not trying to understand the political angle from which the author is writing or wondering if I'm missing a reference that's important to the story. In other words, the science fiction setting gives the narrator more power. The story tells me the important details of the background; there isn't some true history lurking beneath that I'm trying to ferret out. When that's combined with interesting physical differences, I find myself imagining what it would be like to be the various aliens, trying to insert myself into their worlds, rather than placing them in a historical or political context. That puts me in a curious and empathetic mindset, and that, in turn, is the best perspective from which to enjoy Chambers's stories.

The characters in this story don't solve any large-scale problems. They do make life decisions, some quite significant, but only on a personal scale. They also don't resolve all of their suspicions and disagreements. This won't be to everyone's taste, but it's one of the things I most enjoyed about the book: it shows a small part of the lives of a collection of average individuals, none of whom are close to the levers of power and none of whom are responsible for fixing their species or galactic politics. They are responsible for their own choices, and for how their lives touch the lives of others. They can make the people they encounter happier or sadder, they can choose how to be true to their own principles, and they can make hard choices without right answers. When I describe a mainstream fiction book that way, I often find it depressing, but I came away from The Galaxy, and the Ground Within feeling better about the world and more open-hearted towards other people. I'm not sure what Chambers does to produce that reaction, so I'm not sure if it will have the same effect on other people. Perhaps part of it is that while there is some drama, her characters do not seek drama for its own sake, none of the characters are villains, and she has a way of writing sincerity that clicks with my brain.

There is a scene, about two-thirds of the way through the book, where the characters get into a heated argument about politics, and for me this is the moment where you will either love this book or it will not work for you. The argument doesn't resolve anything, and yet it's one of the most perceptive, accurate, and satisfying portrayals of a political argument among normal people that I've seen in fiction. It's the sort of air-clearing conversation in which every character is blunt with both their opinion and their emotions rather than shading them for politeness. Those positions are not necessarily sophisticated or deeply philosophical, but they are deeply honest.
"And you know what? I truly don't care which of them is right so long as it fixes everything. I don't have an... an ideology. I don't know the right terms to discuss these things. I don't know the science behind any of it. I'm sure I sound silly right now. But I just want everyone to get along, and to be well taken care of. That's it. I want everybody to be happy and I do not care how we get there." She exhaled, her broad nostrils flaring. "That's how I feel about it."
I am not Ouloo, but I think she represents far more people than fiction normally realizes, and I found something deeply satisfying and revealing in seeing that position presented so clearly in the midst of a heated argument.

If you like what Chambers does, I think you will like this book. If it's not for you, this is probably not the book that will change your mind, although there is a bit less hand-wavy technology to distract the people whom that distracts. The Galaxy, and the Ground Within didn't have the emotional resonance that Record of a Spaceborn Few had for me, or the emotional gut punch of A Closed and Common Orbit. But I loved every moment of reading it.

This will apparently be the last novel in the Wayfarers universe, at least for the time being. Chambers will be moving on to other settings (starting with A Psalm for the Wild-Built).

Rating: 8 out of 10

2 August 2021

Colin Watson: Launchpad now runs on Python 3!

After a very long porting journey, Launchpad is finally running on Python 3 across all of our systems. I wanted to take a bit of time to reflect on why my emotional responses to this port differ so much from those of some others who've done large ports, such as the Mercurial maintainers. It's hard to deny that we've had to burn a lot of time on this, which I'm sure has had an opportunity cost, and from one point of view it's essentially running to stand still: there is no single compelling feature that we get solely by porting to Python 3, although it's clearly a prerequisite for tidying up old compatibility code and being able to use modern language facilities in the future. And yet, on the whole, I found this a rewarding project and enjoyed doing it. Some of this may be because by inclination I'm a maintenance programmer and actually enjoy this sort of thing. My default view tends to be that software version upgrades may be a pain but it's much better to get that pain over with as soon as you can rather than trying to hold back the tide; you can certainly get involved and try to shape where things end up, but rightly or wrongly I can't think of many cases when a righteously indignant user base managed to arrange for the old version to be maintained in perpetuity so that they never had to deal with the new thing (OK, maybe Perl 5 counts here).

I think a more compelling difference between Launchpad and Mercurial, though, may be that very few other people really had a vested interest in what Python version Launchpad happened to be running, because it's all server-side code (aside from some client libraries such as launchpadlib, which were ported years ago). As such, we weren't trying to do this with the internet having Strong Opinions at us. We were doing this because it was obviously the only long-term-maintainable path forward, and in more recent times because some of our library dependencies were starting to drop support for Python 2 and so it was obviously going to become a practical problem for us sooner or later; but if we'd just stayed on Python 2 forever then fundamentally hardly anyone else would really have cared directly, only maybe about some indirect consequences of that. I don't follow Mercurial development so I may be entirely off-base, but if other people were yelling at me about how late my project was to finish its port, that in itself would make me feel more negatively about the project even if I thought it was a good idea. Having most of the pressure come from ourselves rather than from outside meant that wasn't an issue for us.

I'm somewhat inclined to think of the process as an extreme version of paying down technical debt. Moving from Python 2.7 to 3.5, as we just did, means skipping over multiple language versions in one go, and if similar changes had been made more gradually it would probably have felt a lot more like the typical dependency update treadmill. I appreciate why not everyone might want to think of it this way: maybe this is just my own rationalization.

Reflections on porting to Python 3
I'm not going to defend the Python 3 migration process; it was pretty rough in a lot of ways. Nor am I going to spend much effort relitigating it here, as it's already been done to death elsewhere, and as I understand it the core Python developers have got the message loud and clear by now.
At a bare minimum, a lot of valuable time was lost early in Python 3's lifetime hanging on to flag-day-type porting strategies that were impractical for large projects, when it should have been providing for "bilingual" strategies (code that runs in both Python 2 and 3 for a transitional period), which is where most libraries and most large migrations ended up in practice. For instance, the early advice to library maintainers to maintain two parallel versions or perhaps translate dynamically with 2to3 was entirely impractical in most non-trivial cases and wasn't what most people ended up doing, and yet the idea that "2to3 is all you need" still floats around Stack Overflow and the like as a result. (These days, I would probably point people towards something more like Eevee's porting FAQ as somewhere to start.)

There are various fairly straightforward things that people often suggest could have been done to smooth the path, and I largely agree: not removing the u'' string prefix only to put it back in 3.3, fewer gratuitous compatibility breaks in the name of tidiness, and so on. But if I had a time machine, the number one thing I would ask to have been done differently would be introducing type annotations in Python 2 before Python 3 branched off. It's true that it's technically possible to do type annotations in Python 2, but the fact that it's a different syntax that would have to be fixed later is offputting, and in practice it wasn't widely used in Python 2 code. To make a significant difference to the ease of porting, annotations would need to have been introduced early enough that lots of Python 2 library code used them so that porting code didn't have to be quite so much of an exercise of manually figuring out the exact nature of string types from context.

Launchpad is a complex piece of software that interacts with multiple domains: for example, it deals with a database, HTTP, web page rendering, Debian-format archive publishing, and multiple revision control systems, and there's often overlap between domains. Each of these tends to imply different kinds of string handling. Web page rendering is normally done mainly in Unicode, converting to bytes as late as possible; revision control systems normally want to spend most of their time working with bytes, although the exact details vary; HTTP is of course bytes on the wire, but Python's WSGI interface has some string type subtleties. In practice I found myself thinking about at least four string-like types (that is, things that in a language with a stricter type system I might well want to define as distinct types and restrict conversion between them): bytes, text, ordinary native strings (str in either language, encoded to UTF-8 in Python 2), and native strings with WSGI's encoding rules. Some of these are emergent properties of writing in the intersection of Python 2 and 3, which is effectively a specialized language of its own without coherent official documentation, whose users must intuit its behaviour by comparing multiple sources of information or by referring to unofficial porting guides: not a very satisfactory situation. Fortunately much of the complexity collapses once it becomes possible to write solely in Python 3.
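For illustration only, here is how those four string-like kinds look from code written in that 2/3 intersection; this is a generic sketch, not Launchpad code, and the WSGI note paraphrases PEP 3333:

# Illustrative only: the four string-like kinds described above, as seen
# from code that has to run on both Python 2 and 3.
raw = b"bytes on the wire"    # bytes on both versions
text = u"rendered page text"  # text on both versions
native = "PATH_INFO"          # native str (without unicode_literals):
                              # bytes on Python 2, text on Python 3
# The fourth kind, WSGI native strings, are native strs further restricted
# to code points that round-trip through ISO-8859-1 (PEP 3333's rules).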
Some of the difficulties we ran into are not ones that are typically thought of as Python 2-to-3 porting issues, because they were changed later in Python 3's development process. For instance, the email module was substantially improved in around the 3.2/3.3 timeframe to handle Python 3's bytes/text model more correctly, and since Launchpad sends quite a few different kinds of email messages and has some quite picky tests for exactly what it emits, this entailed a lot of work in our email sending code and in our test suite to account for that. (It took me a while to work out whether we should be treating raw email messages as bytes or as text; bytes turned out to work best.) 3.4 made some tweaks to the implementation of quoted-printable encoding that broke a number of our tests in ways that took some effort to fix, because the tests needed to work on both 2.7 and 3.5. The list goes on. I got quite proficient at digging through Python's git history to figure out when and why some particular bit of behaviour had changed.

One of the thorniest problems was parsing HTTP form data. We mainly rely on zope.publisher for this, which in turn relied on cgi.FieldStorage; but cgi.FieldStorage is badly broken in some situations on Python 3. Even if that bug were fixed in a more recent version of Python, we can't easily use anything newer than 3.5 for the first stage of our port due to the version of the base OS we're currently running, so it wouldn't help much. In the end I fixed some minor issues in the multipart module (and was kindly given co-maintenance of it) and converted zope.publisher to use it. Although this took a while to sort out, it seems to have gone very well.

A couple of other interesting late-arriving issues were around pickle. For most things we normally prefer safer formats such as JSON, but there are a few cases where we use pickle, particularly for our session databases. One of my colleagues pointed out that I needed to remember to tell pickle to stick to protocol 2, so that we'd be able to switch back and forward between Python 2 and 3 for a while; quite right, and we later ran into a similar problem with marshal too. A more surprising problem was that datetime.datetime objects pickled on Python 2 require special care when unpickling on Python 3; rather than the approach that ended up being implemented and documented for Python 3.6, though, I preferred a custom unpickler, both so that things would work on Python 3.5 and so that I wouldn't have to risk affecting the decoding of other pickled strings in the session database.
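As a minimal sketch of that protocol-pinning rule (illustrative data, not Launchpad's session code):

import pickle

# Protocol 2 is the highest pickle protocol Python 2 can read, so pinning
# it keeps stored data readable while both interpreter versions serve traffic.
blob = pickle.dumps({"user": "example", "logged_in": True}, protocol=2)
session = pickle.loads(blob)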
General lessons
Writing this over a year after Python 2's end-of-life date, and certainly nowhere near the leading edge of Python 3 porting work, it's perhaps more useful to look at this in terms of the lessons it has for other large technical debt projects. I mentioned in my previous article that I used the approach of an enormous and frequently-rebased git branch as a working area for the port, committing often and sometimes combining and extracting commits for review once they seemed to be ready. A port of this scale would have been entirely intractable without a tool of similar power to git rebase, so I'm very glad that we finished migrating to git in 2019. I relied on this right up to the end of the port, and it also allowed for quick assessments of how much more there was to land. git worktree was also helpful, in that I could easily maintain working trees built for each of Python 2 and 3 for comparison.

As is usual for most multi-developer projects, all changes to Launchpad need to go through code review, although we sometimes make exceptions for very simple and obvious changes that can be self-reviewed. Since I knew from the outset that this was going to generate a lot of changes for review, I structured my work to try to make it as easy as possible for my colleagues to review it. This generally involved keeping most changes to a somewhat manageable size of 800 lines or less (although this wasn't always possible), and arranging commits mainly according to the kind of change they made rather than their location. For example, when I needed to fix issues with / in Python 3 being true division rather than floor division, I did so in one commit across the various places where it mattered and took care not to mix it with other unrelated changes. This is good practice for nearly any kind of development, but it was especially important here since it allowed reviewers to consider a clear explanation of what I was doing in the commit message and then skim-read the rest of it much more quickly.

It was vital to keep the codebase in a working state at all times, and deploy to production reasonably often: this way if something went wrong the amount of code we had to debug to figure out what had happened was always tractable. (Although I can't seem to find it now to link to it, I saw an account a while back of a company that had taken a flag-day approach instead with a large codebase. It seemed to work for them, but I'm certain we couldn't have made it work for Launchpad.)

I can't speak too highly of Launchpad's test suite, much of which originated before my time. Without a great deal of extensive coverage of all sorts of interesting edge cases at both the unit and functional level, and a corresponding culture of maintaining that test suite well when making new changes, it would have been impossible to be anything like as confident of the port as we were.

As part of the porting work, we split out a couple of substantial chunks of the Launchpad codebase that could easily be decoupled from the core: its Mailman integration and its code import worker. Both of these had substantial dependencies with complex requirements for porting to Python 3, and arranging to be able to do these separately on their own schedule was absolutely worth it. Like disentangling balls of wool, any opportunity you can take to make things less tightly-coupled is probably going to make it easier to disentangle the rest. (I can see a tractable way forward to porting the code import worker, so we may well get that done soon. Our Mailman integration will need to be rewritten, though, since it currently depends on the Python-2-only Mailman 2, and Mailman 3 has a different architecture.)

Python lessons
Our database layer was already in pretty good shape for a port, since at least the modern bits of its table modelling interface were already strict about using Unicode for text columns. If you have any kind of pervasive low-level framework like this, then making it be pedantic at you in advance of a Python 3 port will probably incur much less swearing in the long run, as you won't be trying to deal with quite so many bytes/text issues at the same time as everything else.

Early in our port, we established a standard set of __future__ imports and started incrementally converting files over to them, mainly because we weren't yet sure what else to do and it seemed likely to be helpful. absolute_import was definitely reasonable (and not often a problem in our code), and print_function was annoying but necessary.
In hindsight I'm not sure about unicode_literals, though. For files that only deal with bytes and text it was reasonable enough, but as I mentioned above there were also a number of cases where we needed literals of the language's native str type, i.e. bytes in Python 2 and text in Python 3: this was particularly noticeable in WSGI contexts, but also cropped up in some other surprising places. We generally either omitted unicode_literals or used six.ensure_str in such cases, but it was definitely a bit awkward and maybe I should have listened more to people telling me it might be a bad idea.
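As a sketch of what that looks like in practice; the import set is the one named above, and six.ensure_str is the workaround just mentioned for native-str literals:

from __future__ import absolute_import, print_function, unicode_literals

import six

# With unicode_literals in effect every bare literal is text, so a value
# that must be the native str type on both versions needs a conversion:
key = six.ensure_str("PATH_INFO")  # UTF-8 bytes str on Python 2, text on Python 3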
A lot of Launchpad's early tests used doctest, mainly in the style where you have text files that interleave narrative commentary with examples. The development team later reached consensus that this was best avoided in most cases, but by then there were far too many doctests to conveniently rewrite in some other form. Porting doctests to Python 3 is really annoying. You run into all the little changes in how objects are represented as text (particularly u'...' versus '...', but plenty of other cases as well); you have next to no tools to do anything useful like skipping individual bits of a doctest that don't apply; using __future__ imports requires the rather obscure approach of adding the relevant names to the doctest's globals in the relevant DocFileSuite or DocTestSuite; dealing with many exception tracebacks requires something like zope.testing.renormalizing; and whatever code refactoring tools you're using probably don't work properly. Basically, don't have done that. It did all turn out to be tractable for us in the end, and I managed to avoid using much in the way of fragile doctest extensions aside from the aforementioned zope.testing.renormalizing, but it was not an enjoyable experience.

Regressions
I know of nine regressions that reached Launchpad's production systems as a result of this porting work; of course there were various other regressions caught by CI or in manual testing. (Considering the size of this project, I count it as a resounding success that there were only nine production issues, and that for the most part we were able to fix them quickly.)

Equality testing of removed database objects
One of the things we had to do while porting to Python 3 was to implement the __eq__, __ne__, and __hash__ special methods for all our database objects. This was quite conceptually fiddly, because doing this requires knowing each object's primary key, and that may not yet be available if we've created an object in Python but not yet flushed the actual INSERT statement to the database (most of our primary keys are auto-incrementing sequences). We thus had to take care to flush pending SQL statements in such cases in order to ensure that we know the primary keys. However, it's possible to have a problem at the other end of the object lifecycle: that is, a Python object might still be reachable in memory even though the underlying row has been DELETEd from the database. In most cases we don't keep removed objects around for obvious reasons, but it can happen in caching code, and buildd-manager crashed as a result (in fact while it was still running on Python 2). We had to take extra care to avoid this problem.

Debian imports crashed on non-UTF-8 filenames
Python 2 has some unfortunate behaviour around passing bytes or Unicode strings (depending on the platform) to shutil.rmtree, and the combination of some porting work and a particular source package in Debian that contained a non-UTF-8 file name caused us to run into this. The fix was to ensure that the argument passed to shutil.rmtree is a str regardless of Python version. We'd actually run into something similar before: it's a subtle porting gotcha, since it's quite easy to end up passing Unicode strings to shutil.rmtree if you're in the process of porting your code to Python 3, and you might easily not notice if the file names in your tests are all encoded using UTF-8.

lazr.restful ETags
We eventually got far enough along that we could switch one of our four appserver machines (we have quite a number of other machines too, but the appservers handle web and API requests) to Python 3 and see what happened. By this point our extensive test suite had shaken out the vast majority of the things that could go wrong, but there was always going to be room for some interesting edge cases. One of the Ubuntu kernel team reported that they were seeing an increase in 412 Precondition Failed errors in some of their scripts that use our webservice API. These can happen when you're trying to modify an existing resource: the underlying protocol involves sending an If-Match header with the ETag that the client thinks the resource has, and if this doesn't match the ETag that the server calculates for the resource then the client has to refresh its copy of the resource and try again. We initially thought that this might be legitimate since it can happen in normal operation if you collide with another client making changes to the same resource, but it soon became clear that something stranger was going on: we were getting inconsistent ETags for the same object even when it was unchanged. Since we'd recently switched a quarter of our appservers to Python 3, that was a natural suspect. Our lazr.restful package provides the framework for our webservice API, and roughly speaking it generates ETags by serializing objects into some kind of canonical form and hashing the result. Unfortunately the serialization was dependent on the Python version in a few ways, and in particular it serialized lists of strings such as lists of bug tags differently: Python 2 used [u'foo', u'bar', u'baz'] where Python 3 used ['foo', 'bar', 'baz']. In lazr.restful 1.0.3 we switched to using JSON for this, removing the Python version dependency and ensuring consistent behaviour between appservers.
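A toy illustration of the failure mode and of why JSON serialization fixes it (not lazr.restful's actual code):

import hashlib
import json

tags = [u'foo', u'bar', u'baz']
# repr(tags) differs between versions: "[u'foo', u'bar', u'baz']" on
# Python 2 versus "['foo', 'bar', 'baz']" on Python 3, so a hash of it
# differs too. json.dumps produces the same text on both:
etag = hashlib.sha1(json.dumps(tags).encode('utf-8')).hexdigest()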
Memory leaks
This problem took the longest to solve. We noticed fairly quickly from our graphs that the appserver machine we'd switched to Python 3 had a serious memory leak. Our appservers had always been a bit leaky, but now it wasn't so much "a small hole that we can bail occasionally" as "the boat is sinking rapidly":

[Graph: A serious memory leak]

(Yes, this got in the way of working out what was going on with ETags for a while.) I spent ages messing around with various attempts to fix this. Since only a quarter of our appservers were affected, and we could get by on 75% capacity for a while, it wasn't urgent but it was definitely annoying. After spending some quality time with objgraph, for some time I thought traceback reference cycles might be at fault, and I sent a number of fixes to various upstream projects for those (e.g. zope.pagetemplate). Those didn't help the leaks much though, and after a while it became clear to me that this couldn't be the sole problem: Python has a cyclic garbage collector that will eventually collect reference cycles as long as there are no strong references to any objects in them, although it might not happen very quickly. Something else must be going on. Debugging reference leaks in any non-trivial and long-running Python program is extremely arduous, especially with ORMs that naturally tend to end up with lots of cycles and caches. After a while I formed a hypothesis that zope.server might be keeping a strong reference to something, although I never managed to nail it down more firmly than that. This was an attractive theory as we were already in the process of migrating to Gunicorn for other reasons anyway, and Gunicorn also has a convenient max_requests setting that's good at mitigating memory leaks. Getting this all in place took some time, but once we did we found that everything was much more stable:

[Graph: A rather flat memory graph]

This isn't completely satisfying as we never quite got to the bottom of the leak itself, and it's entirely possible that we've only papered over it using max_requests: I expect we'll gradually back off on how frequently we restart workers over time to try to track this down. However, pragmatically, it's no longer an operational concern.
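For reference, the mitigation expressed as a Gunicorn config sketch; max_requests and max_requests_jitter are the relevant settings, and the numbers are purely illustrative:

# gunicorn.conf.py: recycle each worker after roughly this many requests,
# with jitter so the workers don't all restart at the same moment.
max_requests = 1000
max_requests_jitter = 50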
Mirror prober HTTPS proxy handling
After we switched our script servers to Python 3, we had several reports of mirror probing failures. (Launchpad keeps lists of Ubuntu archive and image mirrors, and probes them every so often to check that they're reasonably complete and up to date.) This only affected HTTPS mirrors when probed via a proxy server, support for which is a relatively recent feature in Launchpad and involved some code that we never managed to unit-test properly: of course this is exactly the code that went wrong. Sadly I wasn't able to sort out that gap, but at least the fix was simple.

Non-MIME-encoded email headers
As I mentioned above, there were substantial changes in the email package between Python 2 and 3, and indeed between minor versions of Python 3. Our test coverage here is pretty good, but it's an area where it's very easy to have gaps. We noticed that a script that processes incoming email was crashing on messages with headers that were non-ASCII but not MIME-encoded (and indeed then crashing again when it tried to send a notification of the crash!). The only examples of these I looked at were spam, but we still didn't want to crash on them. The fix involved being somewhat more careful about both the handling of headers returned by Python's email parser and the building of outgoing email notifications. This seems to be working well so far, although I wouldn't be surprised to find the odd other incorrect detail in this sort of area.

Failure to handle non-ISO-8859-1 URL-encoded form input
Remember how I said that parsing HTTP form data was thorny? After we finished upgrading all our appservers to Python 3, people started reporting that they couldn't post Unicode comments to bugs, which turned out to be only if the attempt was made using JavaScript, and was because I hadn't quite managed to get URL-encoded form data working properly with zope.publisher and multipart. The current standard describes the URL-encoded format for form data as "in many ways an aberrant monstrosity", so this was no great surprise. Part of the problem was some very strange choices in zope.publisher dating back to 2004 or earlier, which I attempted to clean up and simplify. The rest was that Python 2's urlparse.parse_qs unconditionally decodes percent-encoded sequences as ISO-8859-1 if they're passed in as part of a Unicode string, so multipart needs to work around this on Python 2. I'm still not completely confident that this is correct in all situations, but at least now that we're on Python 3 everywhere the matrix of cases we need to care about is smaller.

Inconsistent marshalling of Loggerhead's disk cache
We use Loggerhead for providing web browsing of Bazaar branches. When we upgraded one of its two servers to Python 3, we immediately noticed that the one still on Python 2 was failing to read back its revision information cache, which it stores in a database on disk. (We noticed this because it caused a deployment to fail: when we tried to roll out new code to the instance still on Python 2, Nagios checks had already caused an incompatible cache to be written for one branch from the Python 3 instance.) This turned out to be a similar problem to the pickle issue mentioned above, except this one was with marshal, which I didn't think to look for because it's a relatively obscure module mostly used for internal purposes by Python itself; I'm not sure that Loggerhead should really be using it in the first place. The fix was relatively straightforward, complicated mainly by now needing to cope with throwing away unreadable cache data. Ironically, if we'd just gone ahead and taken the nominally riskier path of upgrading both servers at the same time, we might never have had a problem here.

Intermittent bzr failures
Finally, after we upgraded one of our two Bazaar codehosting servers to Python 3, we had a report of intermittent bzr branch hangs. After some digging I found this in our logs:
Traceback (most recent call last):
  ...
  File "/srv/bazaar.launchpad.net/production/codehosting1-rev-20124175fa98fcb4b43973265a1561174418f4bd/env/lib/python3.5/site-packages/twisted/conch/ssh/channel.py", line 136, in addWindowBytes
    self.startWriting()
  File "/srv/bazaar.launchpad.net/production/codehosting1-rev-20124175fa98fcb4b43973265a1561174418f4bd/env/lib/python3.5/site-packages/lazr/sshserver/session.py", line 88, in startWriting
    resumeProducing()
  File "/srv/bazaar.launchpad.net/production/codehosting1-rev-20124175fa98fcb4b43973265a1561174418f4bd/env/lib/python3.5/site-packages/twisted/internet/process.py", line 894, in resumeProducing
    for p in self.pipes.itervalues():
builtins.AttributeError: 'dict' object has no attribute 'itervalues'
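The AttributeError is Python 3's removal of the dict.iter* methods; the bilingual spelling, roughly what the fix amounts to, is:

pipes = {0: "stdin", 1: "stdout", 2: "stderr"}
# dict.itervalues() exists only on Python 2; values() works on both
# (a list on Python 2, a view on Python 3), which is all a loop needs.
for p in pipes.values():
    print(p)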
I'd seen this before in our git hosting service: it was a bug in Twisted's Python 3 port, fixed after 20.3.0 but unfortunately after the last release that supported Python 2, so we had to backport that patch. Using the same backport dealt with this. Onwards!

7 June 2021

Russ Allbery: Review: Stoneskin

Review: Stoneskin, by K.B. Spangler
Series: Deep Witches #0
Publisher: A Girl and Her Fed Books
Copyright: September 2017
ASIN: B075PHK498
Format: Kindle
Pages: 226
Stoneskin is a prequel to the Deep Witches Trilogy, which is why I have it marked as book 0 of the series. Unlike most prequels, it was written and published before the series and there doesn't seem to be any reason not to read it first.

Tembi Moon is an eight-year-old girl from the poor Marumaru area on the planet of Adhama. Humanity has spread to the stars and first terraformed the worlds and then bioformed themselves to live there. The differences are subtle, but Tembi's skin becomes thicker and less sensitive when damaged (either physically or emotionally) and she can close her ears against dust storms. One day, she wakes up in an unknown alley and finds herself on the world of Miha'ana, sixteen thousand light-years away, where she is rescued and brought home by a Witch named Matindi.

In this science fiction future, nearly all interstellar travel is done through the Deep. The Deep is not like the typical hand-waved science fiction subspace, most notably in that it's alive. No one is entirely sure where it came from or what sort of creature it is. It sometimes manages to communicate in words, but abstract patterns with feelings attached are more common, and it only communicates with specific people. Those people are Witches, who are chosen by the Deep via some criteria no one understands. Witches can use the Deep to move themselves or anything else around the galaxy. All interstellar logistics rely on them.

The basics of Tembi's story are not that unusual; she's been chosen by the Deep to be a Witch. What is remarkable is that she's young and she's poor, completely disconnected from the power structures of the galaxy. But, once chosen, her path as far as the rest of the galaxy is concerned is fixed: she will go to Lancaster to be trained as a Witch. Matindi is able to postpone this for a time by keeping an eye on her, but not forever.

I bought this book because of the idea of the Deep, and that is indeed the best part of the book. There is a lot of mystery about its exact nature, most of which is not resolved in this book, but it mostly behaves like a giant and extremely strange dog, and it's awesome. Replacing the various pseudo-scientific explanations for faster than light travel with interactions with a dream-weird giant St. Bernard with too many paws that talks in swirls of colored bubbles and is very eager to please its friends is brilliant.

This book covers a lot of years of Tembi's life and is, as advertised, a prelude to a story that is not resolved here. It's a coming of age story in which she does eventually end up at Lancaster, learns and chafes at the elaborate and very conservative structures humans have put in place to try to make interactions with the Deep predictable and reliable, and eventually gets drawn into the politics of war and the question of when people have a responsibility to intervene. Tembi, and the reader, also have many opportunities to get extremely upset at how the Deep is treated and how much entitlement the Witches have about their access and control, although how the Deep feels about it is left for a future book.

Not all of this story is as good as the premise. There are some standard coming of age tropes that I'm not fond of, such as Tembi's predictable temporary falling out with the Deep (although the Deep's reaction is entertaining). It's also not at all a complete story, although that's clearly signaled by the subtitle. But as an introduction to the story universe and an extended bit of scene-setting, it accomplishes what it sets out to do.
It's also satisfyingly thoughtful about the moral trade-offs around stability and the value of preserving institutions. I know which side I'm on within the universe, but I appreciated how much nuance and thoughtfulness Spangler puts into the contrary opinion. I'm hooked on the universe and want to learn more about the Deep, enough so that I've already bought the first book of the main trilogy.

Followed by The Blackwing War.

Rating: 7 out of 10

2 June 2021

Sven Hoexter: pulseaudio/alsa and dynamic mic sensitivity in my browser

It's a gross hack but works for now. To prevent overly sensitive mic settings autotuned by the browser in web conferences, I currently edit as root /usr/share/pulseaudio/alsa-mixer/paths/analog-input-internal-mic.conf. In [Element Capture], change the setting volume from merge to 80. The config block as a whole looks like this:
[Element Capture]
switch = mute
volume = 80
override-map.1 = all
override-map.2 = all-left,all-right
Solution found at https://askubuntu.com/a/761103.

27 May 2021

Wouter Verhelst: Freenode

Bye, Freenode. I have been on Freenode for about 20 years, since my earliest involvement with Debian in about 2001. When Debian moved to OFTC for its IRC presence way back in 2006, I hung around on Freenode somewhat since FOSDEM's IRC channels were still there, as well as for a number of other channels that I was on at the time (not anymore though). This is now over and done with. What's happening with Freenode is a shitstorm -- one that could easily have been fixed if one particular person had stepped down a few days ago, but by now it is a lost cause. At any rate, I'm now lurking, mostly for FOSDEM channels, on libera.chat, under my usual nick, as well as on OFTC.

22 May 2021

Mike Gabriel: Upcoming brainstorming discussion about Debian for the Enterprise

Recently, Raphael Hertzog published ideas [1] about how to make Debian more attractive for big enterprises. One missing keystone here is the possibility to sign up for an enterprise support subscription scheme. Another question tackles how to provide such a support scheme within Debian, without disturbing the current flow of how Debian is developed these days. And there are likely more questions to ask, riddles to solve, and hurdles to overcome. We want to discuss this topic, brainstorm on it, collect new ideas and also hear your concerns on a public channel. Over the past weeks there have already been mail exchanges off-list. We want to reboot this privately started discussion in public now (as that's where it belongs), starting around the end of the coming week, via the currently quite inactive Debian mailing list 'debian-enterprise' [2]. Please join the discussion (and the mailing list) [3] if you are interested in this topic. light & love
Mike (aka sunweaver)

[1] https://raphaelhertzog.com/2021/03/30/challenging-times-for-freexian-1/
(also read parts 2-4)
[2] debian-enterprise@lists.debian.org
[3] https://lists.debian.org/debian-enterprise

21 April 2021

Sven Hoexter: bullseye: doveadm as unprivileged user with dovecot ssl config

The dovecot version which will be released with bullseye seems to require some subtle config adjustment if you run doveadm as an unprivileged user. I guess one of the common cases is executing doveadm pw, e.g. if you use postfixadmin. For myself that manifested in the nginx error log, which I use in combination with php-fpm, as:
2021/04/19 20:22:59 [error] 307467#307467: *13 FastCGI sent in stderr: "PHP message:
Failed to read password from /usr/bin/doveadm pw ... stderr: doveconf: Fatal: 
Error in configuration file /etc/dovecot/conf.d/10-ssl.conf line 12: ssl_cert:
Can't open file /etc/dovecot/private/dovecot.pem: Permission denied
You can easily see the same error message if you just execute something like doveadm pw -p test123. The workaround is to move your ssl configuration to a new file which is only readable by root, and to create a dummy one which disables ssl and has a !include_try on the real one. Maybe it's best explained by showing the modification:
cd /etc/dovecot/conf.d
cp 10-ssl.conf 10-ssl_server
chmod 600 10-ssl_server
echo 'ssl = no' > 10-ssl.conf
echo '!include_try 10-ssl_server' >> 10-ssl.conf
Discussed upstream here.

1 February 2021

Bits from Debian: Arduino is back on Debian

The Debian Electronics Team is happy to announce that the latest version of Arduino, probably the most widespread platform for programming AVR micro-controllers, is now packaged and uploaded to Debian unstable. The last version of Arduino that was readily available in Debian was 1.0.5, which dates back to 2013. It's been years of trying and failing, but finally, after a great months-long effort from Carsten Schoenert and Rock Storm, we have a working package for the latest Arduino. After over 7 years, users will be able to install the Arduino IDE as easily as "apt install arduino" again. "The purpose of this post is not just to announce this new upload but actually more of a request for testing," said Rock Storm. "The title could very well be WANTED: Beta Testers for Arduino (dead or alive :P)." The Debian Electronics Team would appreciate it if anyone with the tools and knowledge for it could give the package a try and let us know if he/she finds any issues with it. With this post we thank the Debian Electronics Team and all previous contributors to the package. This feat would not have been achievable without them.

23 December 2020

Sven Hoexter: Jenkins dynamically parameterized pipelines for terraform execution

Jenkins in the Ops space is in general already painful. Lately the deprecation of the multiple-scms plugin caused some headache, because we relied heavily on it to generate pipelines in a Seedjob based on the structure inside secondary repositories. We kind of started from scratch now and ship parameterized pipelines defined in Jenkinsfiles in those secondary repositories. Basically that is the way it should be: you store the pipeline definition along with the code you'd like to execute. In our case that is mostly terraform and ansible.

Problem
The directory structure is roughly "stage" -> "project" -> "service". We'd like to have one job pipeline per project, which dynamically reads all service folder names and offers those as available parameters. A service folder is the smallest entity we manage with terraform in a separate state file. Now Jenkins pipelines are by intention limited, but you can add some groovy at will if you whitelist the usage in Jenkins. You have to click through some security though to make it work.

Jenkinsfile
This is basically a commented version of the Jenkinsfile we now copy around as a template, to be manually adjusted per project.
// Syntax: https://jenkins.io/doc/book/pipeline/syntax/
// project name as we use it in the folder structure and job name
def TfProject = "myproject-I-dev"
// directory relative to the repo checkout inside the jenkins workspace
def jobDirectory = "terraform/dev/${TfProject}"
// informational string to describe the stage or project
def stageEnvDescription = "DEV"
/* Attention please if you rebuild the Jenkins instance consider the following:
   - You have to run this job at least *thrice*. It first has to checkout the
     repository, then you have to add permissions for the groovy part, and on
     the third run you can gather the list of available terraform folders.
   - As a safeguard the first folder name is always the invalid string
     "choose-one". That prevents accidental execution of a random project.
   - If you add a new terraform folder you have to run the "choose-one" dummy
     rollout so the dynamic parameters pick up the new folder. */
/* Here we hardcode the path to the correct job workspace on the jenkins host, and
   discover the service folder list. We have to filter it slightly to avoid temporary
   folders created by Jenkins (like @tmp folders). */
List tffolder = new File("/var/lib/jenkins/jobs/terraform ${TfProject}/workspace/${jobDirectory}").listFiles().findAll { it.isDirectory() && it.name ==~ /(?i)[a-z0-9_-]+/ }.sort()
/* ensure the "choose-one" dummy entry is always the first in the list, otherwise
   initial executions might execute something. By default the first parameter is
   used if none is selected */
tffolder.add(0, "choose-one")
pipeline {
    agent any
    /* Show a choice parameter with the service directory list we stored
       above in the variable tffolder */
    parameters {
        choice(name: "TFFOLDER", choices: tffolder)
    }
    // Configure logrotation and coloring.
    options {
        buildDiscarder(logRotator(daysToKeepStr: "30", numToKeepStr: "100"))
        ansiColor("xterm")
    }
    // Set some variables for terraform to pick up the right service account.
    environment {
        GOOGLE_CLOUD_KEYFILE_JSON = '/var/lib/jenkins/cicd.json'
        GOOGLE_APPLICATION_CREDENTIALS = '/var/lib/jenkins/cicd.json'
    }
    stages {
        stage('TF Plan') {
            /* Make sure on every stage that we only execute if the
               choice parameter is not the dummy one. Ensures we
               can run the pipeline smoothly for re-reading the
               service directories. */
            when { expression { params.TFFOLDER != "choose-one" } }
            steps {
                /* Initialize terraform and generate a plan in the selected
                   service folder. */
                dir("${params.TFFOLDER}") {
                    sh 'terraform init -no-color -upgrade=true'
                    sh 'terraform plan -no-color -out myplan'
                }
                // Read in the repo name we act on for informational output.
                script {
                    remoteRepo = sh(returnStdout: true, script: 'git remote get-url origin').trim()
                }
                echo "INFO: job *${JOB_NAME}* in *${params.TFFOLDER}* on branch *${GIT_BRANCH}* of repo *${remoteRepo}*"
            }
        }
        stage('TF Apply') {
            /* Run terraform apply only after manual acknowledgement, we have to
               make sure that the when { } condition is actually evaluated before
               the input. Default is input before when. */
            when {
                beforeInput true
                expression { params.TFFOLDER != "choose-one" }
            }
            input {
                message "Cowboy would you really like to run **${JOB_NAME}** in **${params.TFFOLDER}**"
                ok "Apply ${JOB_NAME} to ${stageEnvDescription}"
            }
            steps {
                dir("${params.TFFOLDER}") {
                    sh 'terraform apply -no-color -input=false myplan'
                }
            }
        }
    }
    post {
        failure {
            // You can also alert to noisy chat platforms on failures if you like.
            echo "job failed"
        }
    }
}
job-dsl side of the story
Having all those when conditions in the pipeline stages above allows us to create a dependency between successful Seedjob executions and just let that trigger the execution of the pipeline jobs. This is important because the Seedjob execution itself will reset all pipeline jobs, so your dynamic parameters are gone. By making sure we can re-execute the job, and doing that automatically, we still have up-to-date parameterized pipelines whenever the Seedjob ran successfully. The job-dsl script looks like this:
import javaposse.jobdsl.dsl.DslScriptLoader;
import javaposse.jobdsl.plugin.JenkinsJobManagement;
import javaposse.jobdsl.plugin.ExecuteDslScripts;

def params = [
    // Defaults are repo: mycorp/admin, branch: master, jenkinsFilename: Jenkinsfile
    pipelineJobs: [
        [name: 'terraform myproject-I-dev', jenkinsFilename: 'terraform/dev/myproject-I-dev/Jenkinsfile', upstream: 'Seedjob'],
        [name: 'terraform myproject-I-prod', jenkinsFilename: 'terraform/prod/myproject-I-prod/Jenkinsfile', upstream: 'Seedjob'],
    ],
]

params.pipelineJobs.each { job ->
    pipelineJob(job.name) {
        definition {
            cpsScm {
                // assume admin and branch master as a default, look for Jenkinsfile
                def repo = job.repo ?: 'mycorp/admin'
                def branch = job.branch ?: 'master'
                def jenkinsFilename = job.jenkinsFilename ?: 'Jenkinsfile'
                scm {
                    git("ssh://git@github.com/${repo}.git", branch)
                }
                scriptPath(jenkinsFilename)
            }
        }
        properties {
            pipelineTriggers {
                triggers {
                    if (job.upstream) {
                        upstream {
                            upstreamProjects("${job.upstream}")
                            threshold('SUCCESS')
                        }
                    }
                }
            }
        }
    }
}
Disadvantages
There are still a bunch of disadvantages you have to consider.

Jenkins Rebuilds are Painful
In general we rebuild our Jenkins instances quite frequently. With the approach outlined here in place, you have to allow the groovy script execution after the first Seedjob execution, and then go through at least another round of "run the job, allow permissions, run the job", until it's finally all up and running.

Copy around Jenkinsfile
Whenever you create a new project you have to copy around Jenkinsfiles for each and every stage and modify the variables at the top accordingly.

Keep the Seedjob definitions and Jenkinsfile in Sync
You not only have to copy the Jenkinsfile around, but you also have to keep the variables and names in sync with what you define for the Seedjob. Sadly the pipeline env-vars are not available outside of the pipeline when we execute the groovy parts.

Kudos
This setup was crafted with a lot of help by Michael and Eric.

John Goerzen: Every Storm Runs Out Of Rain

Every storm runs out of rain. - Maya Angelou
There are a lot of rain clouds in life these days. May we all remember that days like this one are behind us and also ahead of us. Every storm runs out of rain.
That was the start of a series of photos from my collection & quotes I shared with friends during the initial lockdown in spring. I'll be sharing some here. And here we all are, still dealing with this and it's more severe in a lot of ways. One of my colleagues won't be able to see his parents this Christmas for the first time in over 40 years. But this storm will run out of rain. And look how the scene changed, in just a few minutes. This is coming!

21 December 2020

Sven Hoexter: docker buildx sugar - dumping results to disk

The latest docker 20.10.x release unlocks the buildx subcommands, which allow for some sugar, like building something in a container and dumping the result to your local directory in one command. Dockerfile
FROM docker-registry.mycorp.com/debian-node:lts as builder
USER service
COPY . /opt/service
RUN cd /opt/service; npm install; npm run build
FROM scratch as dist
COPY --from=builder /opt/service/dist /
build with
docker buildx build --target=dist --output type=local,dest=$(pwd)/pages/ .
Here we build a page, copy the result with all assets from the /opt/service/dist directory to an empty image and dump it into the local pages directory.

20 December 2020

Sven Hoexter: Mock a Serial pty Device with socat

Another note to myself before I forget about this nifty usage of socat again. I was looking for something to mock a serial device, similar to a microcontroller which usually ends up as /dev/ttyACM0 and might output some text. What I found is a very helpful post on stackoverflow showing an example utilizing socat.
$ socat -d -d pty,rawer pty,rawer
2020/12/20 21:37:53 socat[29130] N PTY is /dev/pts/8
2020/12/20 21:37:53 socat[29130] N PTY is /dev/pts/11
2020/12/20 21:37:53 socat[29130] N starting data transfer loop with FDs [5,5] and [7,7]
Write whatever you need to the second pty, here /dev/pts/11, e.g.
$ i=0; while :; do echo "foo: ${i}" > /dev/pts/11; let i++; sleep 5; done
Now you can listen with whatever you like, e.g. some tool you work on, on the first pty, here /dev/pts/8. For demonstration purposes just use cat:
$ cat /dev/pts/8
foo: 0
foo: 1
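For a consumer that is more than cat, a minimal Python sketch of a line-oriented reader could look like this (the pts path is whatever socat printed and will differ per run; a real serial library like pyserial would work just as well):
#!/usr/bin/env python3
# Minimal reader for the mocked serial device created by socat.
# Adjust PTY_PATH to the pts path printed in the socat log above.
PTY_PATH = "/dev/pts/8"

with open(PTY_PATH) as port:
    while True:
        line = port.readline()   # blocks until the writer sends a newline
        if line:
            print("received:", line.rstrip())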
socat is an awesome tool; the manpage assumes some knowledge about sockets, but it's incredibly versatile.

10 December 2020

John Goerzen: How the Attention Economy Hurts You via Social Media Sites like Facebook

There is a whole science to manipulating our attention. And because there is a lot of money to be made by doing this well, it means we all encounter attempts to manipulate what we pay attention to each day. What is this, and how is it harmful? This post will be the first in a series on the topic. Why is attention so important? When people use Facebook, they use it for free. Facebook generally doesn't even try to sell them anything, yet has billions in revenues. What, then, is Facebook's product? Well, really, it's you. Or, more specifically, your attention. Facebook sells your attention to advertisers. Everything they do is in service to that. They want you to spend more time on the site so they can show you more ads. (I should say here that I'm using Facebook as an example, but this applies to other social media companies too.) Seeking to maximize attention So if your attention is so important to their profit, it follows naturally that they would seek ways to get people to spend more time on their site. And they do. They track all sorts of metrics, including engagement (if you click "like", comment, share, or otherwise interact with content). They know which sorts of things are likely to capture your (and I mean you specifically!) attention and show you that. Your neighbor may have different interests and Facebook judges different things are likely to capture their attention. Manipulating your attention Attention turning into money isn't unique to social media. In fact, in the article If It Bleeds, It Leads: Understanding Fear-Based Media, Psychology Today writes:
In previous decades, the journalistic mission was to report the news as it actually happened, with fairness, balance, and integrity. However, capitalistic motives associated with journalism have forced much of today's television news to look to the spectacular, the stirring, and the controversial as news stories. It's no longer a race to break the story first or get the facts right. Instead, it's to acquire good ratings in order to get advertisers, so that profits soar. News programming uses a hierarchy of "if it bleeds, it leads". Fear-based news programming has two aims. The first is to grab the viewer's attention. In the news media, this is called the teaser. The second aim is to persuade the viewer that the solution for reducing the identified fear will be in the news story. If a teaser asks, "What's in your tap water that YOU need to know about?" a viewer will likely tune in to get the up-to-date information to ensure safety.
You've probably seen fear-based messages a lot on Facebook. They will highlight messages to liberals about being afraid of what Trump is doing, and to conservatives about being afraid of what Biden is doing. They may or may not even intentionally be doing this; it is just that their algorithm predicts that those would maximize time and engagement for certain people, so that's what they see. Fear leads to controversy It's not just fear, though. Social media also loves controversy. There's nothing that makes people really want to stay on Facebook like anger. See something controversial and you'll see hundreds or thousands of people there arguing about it, and in the process, giving Facebook their attention. A quick Internet search will show you numerous articles on how marketing companies can leverage controversy to get attention and engagement with their campaigns. Consequences of maximizing fear and controversy What does it mean to society at large, and to you personally, that large companies make a lot of money by maximizing fear and controversy? The most obvious way is it leads to less common ground. If the posts and reactions that show common ground are never seen because they don't drive engagement, it poisons the well; left and right hate each other with ever more vigor, a profitable outcome for Facebook, but a poisonous one for all of us. I have had several friendships lost because I, a liberal in agreement with these friends on political matters, still talk to Trump voters. On the other side, we've seen people storm the Michigan statehouse with weapons. How did that level of disagreement, and even fear behind it, get so firmly embedded in our society? Surely the fact that social media shows us things designed to stimulate fear and anger must play a role. What does it do to our ability to have empathy for, and understand, others? The Facebook groups I've been in for like-minded people have largely been flooded with memes calling the President "rump" and other things clearly designed to make people angry or fearful. It's a worthless experience, and not just that, but a harmful one. When our major media, TV and social networks, all are optimizing for fear, anger, and controversy, we have a society beholden to fear, anger, and controversy. In my next installment, I'm going to talk about what to do about this, including the decentralized social networks of the Fediverse that are specifically designed to put you back in charge of your attention. Update 2020-12-16: There are two followup articles for this: how to join the Fediverse and non-creepy technology purchasing and gifting guides. The latter references the FSF's page on software manipulation towards addiction, which is particularly relevant to this topic.

8 December 2020

Russell Coker: Links December 2020

Business Insider has an informative article about the way that Google users can get locked out with no apparent reason and no recourse [1]. Something to share with clients when they consider putting everything in "the cloud". Vice has an interesting article about people jailbreaking used Teslas after Tesla has stolen software licenses that were bought with the car [2]. The Atlantic has an interesting article titled This Article Won't Change Your Mind [3]. It's one of many on the topic of echo chambers but has some interesting points that others don't seem to cover, such as regarding the benefits of groups when not everyone agrees. Inequality.org has lots of useful information about global inequality [4]. Jeffrey Goldberg has an insightful interview with Barack Obama for the Atlantic about the future course of American politics and a retrospective on his term in office [5]. A Game Designer's Analysis Of QAnon is an insightful Medium article comparing QAnon to an augmented reality game [6]. This is one of the best analyses of QAnon operations that I've seen. Decrypting Rita is one of the most interesting web comics I've read [7]. It makes good use of side scrolling and different layers to tell multiple stories at once. PC Mag has an article about the new features in Chrome 87 to reduce CPU use [8]. On my laptop I have 1/3 of all CPU time being used when it is idle, the majority of which is from Chrome. As the CPU has 2 cores this means the equivalent of 1 core running about 66% of the time just for background tabs. I have over 100 tabs open, which I admit is a lot. But it means that the active tabs (as opposed to the plain HTML or PDF ones) are averaging more than 1% CPU time on an i7, which seems obviously unreasonable. So Chrome 87 doesn't seem to live up to Google's claims. The movie Bad President, starring Stormy Daniels as herself, is out [9]. Poe's Law is passé. Interesting summary of Parler; it seems that it was designed by the Russians [10]. Wired has an interesting article about Indistinguishability Obfuscation, how to encrypt the operation of a program [11]. Joerg Jaspert wrote an interesting blog post about the difficulties of packaging Rust and Go for Debian [12]. I think that the problem is that many modern languages aren't designed well for library updates. This isn't just a problem for Debian, it's a problem for any long term support of software that doesn't involve transferring a complete archive of everything, and it's a problem for any disconnected development (remote sites and sites dealing with serious security). Having an automatic system for downloading libraries is fine. But there should be an easy way of getting the same source via an archive format (zip will do, as any archive can be converted to any other easily enough) and with version numbers.

22 November 2020

Markus Koschany: My Free Software Activities in October 2020

Welcome to gambaru.de. Here is my monthly report (+ the first week in November) that covers what I have been doing for Debian. If you're interested in Java, Games and LTS topics, this might be interesting for you. Debian Games
Debian Java
pdfsam
Misc Debian LTS This was my 56th month as a paid contributor and I have been paid to work 20.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following: ELTS Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 8 "Jessie". This was my 29th month and I have been paid to work 15 hours on ELTS. Thanks for reading and see you next time.

14 October 2020

Sven Hoexter: Nice Helper to Sanitize File Names - sanity.pl

One of the most awesome helpers I carry around in my ~/bin since the early '00s is the sanity.pl script written by Andreas Gohr. It just recently came back into use when I started to archive some awesome Corona-enforced live session music with youtube-dl. Update: Francois Marier pointed out that Debian contains the detox package, which has similar functionality.

30 September 2020

Paul Wise: FLOSS Activities September 2020

Focus This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration
  • Debian wiki: unblock IP addresses, approve accounts

Communication

Sponsors The gensim, cython-blis, python-preshed, pytest-rerunfailures, morfessor, nmslib, visdom and pyemd work was sponsored by my employer. All other work was done on a volunteer basis.

25 September 2020

Colin Watson: Porting Launchpad to Python 3: progress report

Launchpad still requires Python 2, which in 2020 is a bit of a problem. Unlike a lot of the rest of 2020, though, there's good reason to be optimistic about progress. I've been porting Python 2 code to Python 3 on and off for a long time, from back when I was on the Ubuntu Foundations team and maintaining things like the Ubiquity installer. When I moved to Launchpad in 2015 it was certainly on my mind that this was a large body of code still stuck on Python 2. One option would have been to just accept that and leave it as it is, maybe doing more backporting work over time as support for Python 2 fades away. I've long been of the opinion that this would doom Launchpad to being unmaintainable in the long run, and since I genuinely love working on Launchpad (I find it an incredibly rewarding project) this wasn't something I was willing to accept. We're already seeing some of our important dependencies dropping support for Python 2, which is perfectly reasonable on their terms but which is starting to become a genuine obstacle to delivering important features when we need new features from newer versions of those dependencies. It also looks as though it may be difficult for us to run on Ubuntu 20.04 LTS (we're currently on 16.04, with an upgrade to 18.04 in progress) as long as we still require Python 2, since we have some system dependencies that 20.04 no longer provides. And then there are exciting new features like type hints and async/await that we'd like to be able to use. However, until last year there were so many blockers that even considering a port was barely conceivable. What changed in 2019 was sorting out a trifecta of core dependencies. We ported our database layer, Storm. We upgraded to modern versions of our Zope Toolkit dependencies (after contributing various fixes upstream, including some substantial changes to Zope's test runner that we'd carried as local patches for some years). And we ported our Bazaar code hosting infrastructure to Breezy. With all that in place, a port seemed more of a realistic possibility. Still, even with this, it was never going to be a matter of just following some standard porting advice and calling it good. Launchpad has almost a million lines of Python code in its main git tree, and around 250 dependencies of which a number are quite Launchpad-specific. In a project that size, not only is following standard porting advice an extremely time-consuming task in its own right, but just about every strange corner case is going to show up somewhere. (Did you know that StringIO.StringIO(None) and io.StringIO(None) do different things even after you account for the native string vs. Unicode text difference? How about the behaviour of .union() on a subclass of frozenset?) Launchpad's test suite is fortunately extremely thorough, but even just starting up the test suite involves importing most of the data model code, so before you can start taking advantage of it you have to make a large fraction of the codebase be at least syntactically-correct Python 3 code and use only modules that exist in Python 3 while still working in Python 2; in a project this size that turns out to be a large effort on its own, and can be quite risky in places. Canonical's product engineering teams work on a six-month cycle, but it just isn't possible to cram this sort of thing into six months unless you do literally nothing else, and "please can we put all feature development on hold while we run to stand still" is a pretty tough sell to even the most understanding management.
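As an illustration of the first of those corner cases, a quick sketch of the Python 2 behaviour as I understand it (worth double-checking on a real interpreter):
# Python 2 only: the two StringIO implementations disagree about None.
# Behaviour as recalled; verify on a real Python 2 interpreter.
import StringIO
import io

print(repr(StringIO.StringIO(None).read()))  # 'None' - non-strings are coerced via str()
print(repr(io.StringIO(None).read()))        # u''    - None means "no initial value"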
Fortunately, we've been able to grow the Launchpad team in the last year or so, and so it's been possible to put Python 3 on our roadmap on the understanding that we aren't going to get all the way there in one cycle, while still being able to do other substantial feature development work as well. So, with all that preamble, what have we done this cycle? We've taken a two-pronged approach. From one end, we identified 147 classes that needed to be ported away from some compatibility code in our database layer that was substantially less friendly to Python 3: we've ported 38 of those, so there's clearly a fair bit more to do, but we were able to distribute this work out among the team quite effectively. From the other end, it was clear that it would be very inefficient to do general porting work when any attempt to even run the test suite would run straight into the same crashes in the same order, so I set myself a target of getting the test suite to start up, and started hacking on an enormous git branch that I never expected to try to land directly: instead, I felt free to commit just about anything that looked reasonable and moved things forward even if it was very rough, and every so often went back to tidy things up and cherry-pick individual commits into a form that included some kind of explanation and passed existing tests so that I could propose them for review. This strategy has been dramatically more successful than anything I've tried before at this scale. So far this cycle, considering only Launchpad's main git tree, we've landed 137 Python-3-relevant merge proposals for a total of 39552 lines of git diff output, keeping our existing tests passing along the way and deploying incrementally to production. We have about 27000 more lines of patch at varying degrees of quality to tidy up and merge. Our main development branch is only perhaps 10 or 20 more patches away from the test suite being able to start up, at which point we'll be able to get a buildbot running so that multiple developers can work on this much more easily and see the effect of their work. With the full unlanded patch stack, about 75% of the test suite passes on Python 3! This still leaves a long tail of several thousand tests to figure out and fix, but it's a much more incrementally-tractable kind of problem than where we started. Finally: the funniest (to me) bug I've encountered in this effort was the one I encountered in the test runner and fixed in zopefoundation/zope.testrunner#106: IDs of failing tests were written to a pipe, so if you have a test suite that's large enough and broken enough then eventually that pipe would reach its capacity and your test runner would just give up and hang. Pretty annoying when it meant an overnight test run didn't give useful results, but also eloquent commentary of sorts.
